- Quantum complexity measures the difficulty of realizing a quantum process, such as preparing a state or implementing a unitary. We present an approach to quantifying the thermodynamic resources required to implement a process if the process’s complexity is restricted. We focus on the prototypical task of information erasure, or Landauer erasure, wherein an n-qubit memory is reset to the all-zero state. We show that the minimum thermodynamic work required to reset an arbitrary state in our model, via a complexity-constrained process, is quantified by the state’s complexity entropy. The complexity entropy therefore quantifies a trade-off between the work cost and complexity cost of resetting a state. If the qubits have a nontrivial (but product) Hamiltonian, the optimal work cost is determined by the complexity relative entropy. The complexity entropy quantifies the amount of randomness a system appears to have to a computationally limited observer. Similarly, the complexity relative entropy quantifies such an observer’s ability to distinguish two states. We prove elementary properties of the complexity (relative) entropy. In a random circuit—a simple model for quantum chaotic dynamics—the complexity entropy transitions from zero to its maximal value around the time corresponding to the observer’s computational-power limit. Also, we identify information-theoretic applications of the complexity entropy. The complexity entropy quantifies the resources required for data compression if the compression algorithm must use a restricted number of gates. We further introduce a conditional complexity entropy, which arises naturally in a complexity-constrained variant of information-theoretic decoupling. Assuming that this entropy obeys a conjectured chain rule, we show that the entropy bounds the number of qubits that one can decouple from a reference system, as judged by a computationally bounded referee. Overall, our framework extends the resource-theoretic approach to thermodynamics to integrate a notion of complexity, as quantified by the complexity entropy. (Published by the American Physical Society, 2025.)
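  As a schematic illustration of the headline result: the notation below and the precise form of the expression are our assumptions, patterned on the standard Landauer bound with the von Neumann entropy swapped for the complexity entropy. Here $W(\rho)$ is the minimal work to reset the $n$-qubit state $\rho$ under a complexity-restricted process, and $H_{\mathrm{c}}(\rho)$ is the state's complexity entropy.

  ```latex
  % Schematic, assumed form of the complexity-constrained Landauer bound:
  % the familiar W = k_B T ln2 [n - S(rho)], with the von Neumann entropy
  % S replaced by the complexity entropy H_c. Illustrative notation only.
  \begin{equation}
    W(\rho) \;\approx\; k_{\mathrm{B}} T \ln 2 \,\bigl[\, n - H_{\mathrm{c}}(\rho) \,\bigr],
    \qquad 0 \le H_{\mathrm{c}}(\rho) \le n .
  \end{equation}
  ```

  Intuitively, a computationally bounded agent may fail to "see" structure in $\rho$, so $H_{\mathrm{c}}(\rho)$ can exceed the von Neumann entropy and the work cost of erasure rises accordingly.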
- Large machine learning models are revolutionary technologies of artificial intelligence whose bottlenecks include the huge computational expense, power, and time consumed in both pre-training and fine-tuning. In this work, we show that fault-tolerant quantum computing could provide provably efficient resolutions for generic (stochastic) gradient descent algorithms, scaling as $\mathcal{O}(T^2 \times \mathrm{polylog}(n))$, where $n$ is the size of the models and $T$ is the number of iterations in the training, as long as the models are both sufficiently dissipative and sparse, with small learning rates. Based on earlier efficient quantum algorithms for dissipative differential equations, we find and prove that similar algorithms work for (stochastic) gradient descent, the primary algorithm for machine learning. In practice, we benchmark instances of large machine learning models from 7 million to 103 million parameters. We find that, in the context of sparse training, a quantum enhancement is possible at the early stage of learning after model pruning, motivating a sparse parameter download and re-upload scheme. Our work shows that fault-tolerant quantum algorithms could potentially contribute to most state-of-the-art, large-scale machine-learning problems.
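  The classical side of this picture is easy to sketch. Below is a minimal, illustrative Python loop for sparse training after magnitude pruning; all names, sizes, and the toy quadratic loss are our assumptions, and nothing here implements the paper's quantum algorithm. It only shows the kind of pruned (stochastic) gradient descent that the quantum routine is meant to accelerate.

  ```python
  import numpy as np

  # Toy sketch of sparse training after magnitude pruning: run gradient
  # descent only on the parameters that survive the pruning mask. The
  # quadratic loss and all sizes are illustrative placeholders.
  rng = np.random.default_rng(0)
  n = 10_000                      # model size (toy scale, not 7M-103M)
  theta = rng.normal(size=n)      # model parameters
  target = rng.normal(size=n)     # fixed target defining a quadratic loss

  # Magnitude pruning: keep only the largest 10% of weights.
  mask = np.abs(theta) >= np.quantile(np.abs(theta), 0.9)

  eta = 0.05                      # small learning rate, as the paper assumes
  T = 200                         # number of iterations
  for _ in range(T):
      grad = theta - target       # gradient of 0.5 * ||theta - target||^2
      theta -= eta * grad * mask  # update only the un-pruned coordinates

  print("masked loss:", 0.5 * np.sum((theta - target) ** 2 * mask))
  ```

  In the paper's scheme, only the sparse set of surviving parameters would be downloaded from and re-uploaded to the quantum device, which is what keeps the classical I/O manageable.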
- Detection of very weak forces and precise measurement of time are two of the many applications of quantum metrology to science and technology. To sense an unknown physical parameter, one prepares an initial state of a probe system, allows the probe to evolve as governed by a Hamiltonian H for some time t, and then measures the probe. If H is known, we can estimate t by this method; if t is known, we can estimate classical parameters on which H depends. The accuracy of a quantum sensor can be limited by either intrinsic quantum noise or by noise arising from the interactions of the probe with its environment. In this work, we introduce and study a fundamental trade-off, which relates the amount by which noise reduces the accuracy of a quantum clock to the amount of information about the energy of the clock that leaks to the environment. Specifically, we consider an idealized scenario in which a party Alice prepares an initial pure state of the clock, allows the clock to evolve for a time that is not precisely known, and then transmits the clock through a noisy channel to a party Bob. Meanwhile, the environment (Eve) receives any information about the clock that is lost during transmission. We prove that Bob’s loss of quantum Fisher information about the elapsed time is equal to Eve’s gain of quantum Fisher information about a complementary energy parameter. We also prove a similar, but more general, trade-off that applies when Bob and Eve wish to estimate the values of parameters associated with two noncommuting observables. We derive the necessary and sufficient conditions for the accuracy of the clock to be unaffected by the noise, which form a subset of the Knill-Laflamme error-correction conditions. A state and its local time-evolution direction, if they satisfy these conditions, are said to form a metrological code. We provide a scheme to construct metrological codes in the stabilizer formalism. We show that there are metrological codes that cannot be written as a quantum error-correcting code with similar distance in which the Hamiltonian acts as a logical operator, potentially offering new schemes for constructing states that do not lose any sensitivity upon application of a noisy channel. We discuss applications of the trade-off relation to sensing using a quantum many-body probe subject to erasure or amplitude-damping noise.
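  As a schematic rendering of this trade-off, in notation we introduce for illustration rather than the paper's own: let $F_A(t)$ be the quantum Fisher information about the elapsed time $t$ in Alice's clock state, $F_B(t)$ the corresponding quantity in Bob's received state, and $F_E(\epsilon)$ Eve's quantum Fisher information about a complementary energy parameter $\epsilon$. The stated result then takes the form of an information conservation law:

  ```latex
  % Schematic information-conservation form of the trade-off; the symbols
  % F_A, F_B, F_E and the parameter epsilon are illustrative notation.
  \begin{equation}
    F_A(t) - F_B(t) \;=\; F_E(\epsilon)
  \end{equation}
  % Bob's loss of timing information equals Eve's gain of information
  % about the complementary energy parameter; for noise satisfying the
  % metrological-code conditions, the left-hand side vanishes.
  ```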
- The complexity of quantum states has become a key quantity of interest across various subfields of physics, from quantum computing to the theory of black holes. The evolution of generic quantum systems can be modelled by considering a collection of qubits subjected to sequences of random unitary gates. Here we investigate how the complexity of these random quantum circuits increases by considering how to construct a unitary operation from Haar-random two-qubit quantum gates. Implementing the unitary operation exactly requires a minimal number of gates—this is the operation’s exact circuit complexity. We prove a conjecture that this complexity grows linearly, before saturating when the number of applied gates reaches a threshold that grows exponentially with the number of qubits. Our proof overcomes difficulties in establishing lower bounds for the exact circuit complexity by combining differential topology and elementary algebraic geometry with an inductive construction of Clifford circuits.
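  Schematically, and with illustrative constants, the linear-growth statement can be written as follows, where $C(U_R)$ denotes the exact circuit complexity of a circuit $U_R$ assembled from $R$ Haar-random two-qubit gates on $n$ qubits:

  ```latex
  % Schematic statement of linear complexity growth; the constant c > 0
  % and the exponential threshold are illustrative placeholders, not the
  % paper's exact bounds.
  \begin{equation}
    C(U_R) \;\ge\; c\,R
    \qquad \text{for all } R \le e^{\Omega(n)},
  \end{equation}
  % after which C saturates at its maximal value, exponential in n.
  ```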